67 results for Vector quantization

at Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
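The abstract compares LMVQ against Kohonen's LVQ family without restating the update rules. As background only, here is a minimal sketch of the classical LVQ1 step that serves as the baseline; the learning rate and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    """One LVQ1 update: move the best-matching prototype toward the
    sample if their labels agree, away from it otherwise."""
    w = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return w

# Toy usage: one prototype per class
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = [0, 1]
winner = lvq1_step(protos, labels, np.array([0.2, 0.1]), y=0)
```

LVQ2 and LVQ3 refine this rule by updating the two nearest prototypes when the sample falls in a window between them; the paper derives that behaviour from its margin-maximisation objective.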

Relevance: 100.00%

Abstract:

Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful, for experimentation and practical implementation purposes, to develop and test these models in an efficient manner, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust model approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated similar system performance between the VQ Gaussian-based technique and GMMs.
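The paper's exact VQ-plus-Gaussian construction is not given in the abstract; a common reading of the idea is sketched below under that assumption: a k-means (VQ) stage partitions the feature vectors, and one diagonal-covariance Gaussian is fitted per cell to approximate a GMM. All parameter values are illustrative.

```python
import numpy as np

def vq_gaussian_model(X, k=4, iters=20, seed=0):
    """Approximate a GMM cheaply: k-means (VQ) partitions the feature
    vectors, then one diagonal-covariance Gaussian is fitted per cell."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        cells = np.argmin(((X[:, None, :] - means) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(cells == j):
                means[j] = X[cells == j].mean(axis=0)
    vars_ = np.stack([X[cells == j].var(axis=0) + 1e-6 if np.any(cells == j)
                      else np.ones(X.shape[1]) for j in range(k)])
    weights = np.bincount(cells, minlength=k) / len(X)
    return means, vars_, weights

def avg_log_likelihood(X, means, vars_, weights):
    """Mean per-frame log-likelihood under the mixture, as used when
    scoring a test utterance against a speaker model."""
    d = X.shape[1]
    log_norm = -0.5 * (d * np.log(2 * np.pi) + np.log(vars_).sum(axis=1))
    comp = log_norm - 0.5 * (((X[:, None, :] - means) ** 2) / vars_).sum(-1)
    comp = comp + np.log(weights + 1e-12)
    m = comp.max(axis=1)  # log-sum-exp for numerical stability
    return float(np.mean(m + np.log(np.exp(comp - m[:, None]).sum(axis=1))))
```

In verification, a score of this kind for the claimed speaker's model would be compared against a background model; that scoring logic is outside this sketch.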

Relevance: 60.00%

Abstract:

This paper describes the approach taken to the XML Mining track at INEX 2008 by a group at the Queensland University of Technology. We introduce the K-tree clustering algorithm in an Information Retrieval context by adapting it for document clustering. Document clustering poses many large-scale problems. K-tree scales well to large inputs due to its low complexity, and it offers promising results in terms of both efficiency and quality. Document classification was completed using Support Vector Machines.
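The abstract does not detail the K-tree algorithm itself. As a rough, simplified illustration of its core operation (this is a sketch, not the published algorithm, which nests the step below in a height-balanced, B+-tree-like structure with node splitting): each insertion routes a vector to its nearest cluster mean and updates that mean incrementally.

```python
import numpy as np

def ktree_insert(means, counts, buckets, x):
    """Route x to the nearest cluster mean and update that mean with a
    running average. K-tree proper repeats this routing at every level
    of a balanced tree and splits full nodes; that machinery is omitted."""
    j = int(np.argmin(((means - x) ** 2).sum(axis=1)))
    counts[j] += 1
    means[j] += (x - means[j]) / counts[j]  # incremental mean update
    buckets[j].append(x)
    return j

# Toy usage: two clusters seeded from single vectors
means = np.array([[0.0, 0.0], [10.0, 10.0]])
counts = [1, 1]
buckets = [[], []]
assigned = ktree_insert(means, counts, buckets, np.array([1.0, 1.0]))
```

The low complexity claimed in the abstract comes from this routing touching only one path through the tree per insertion, rather than comparing against every cluster.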

Relevance: 60.00%

Abstract:

Random Indexing K-tree combines two algorithms suited to large-scale document clustering.
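For readers unfamiliar with the first of the two algorithms, here is a minimal sketch of Random Indexing in general; the dimensionality and sparsity below are illustrative choices, not the paper's settings. Each term receives a sparse ternary index vector, and a document is represented by the sum of its terms' index vectors.

```python
import hashlib
import numpy as np

def index_vector(term, dim=100, nonzeros=4):
    """Sparse ternary index vector for a term: a few +/-1 entries at
    pseudo-random positions derived deterministically from the term."""
    seed = int.from_bytes(hashlib.sha256(term.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = np.zeros(dim)
    pos = rng.choice(dim, size=nonzeros, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=nonzeros)
    return v

def document_vector(terms, dim=100):
    """Random Indexing representation: sum of the terms' index vectors."""
    return sum(index_vector(t, dim) for t in terms)
```

The reduced-dimensional document vectors produced this way are then what a clustering algorithm such as K-tree operates on.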

Relevance: 60.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.

In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.

To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
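The least-squares fit of the generalized Gaussian shape parameter is not spelled out in the abstract. As a stand-in for that modelling step, the sketch below uses the standard moment-ratio estimate of the shape parameter beta, solved by bisection; it estimates the same quantity but is not necessarily the thesis's estimator.

```python
import math

def gg_shape_from_moments(samples):
    """Moment-matching estimate of the generalized-Gaussian shape
    parameter b (a stand-in for the thesis's least-squares fit):
    solve  r = G(2/b)^2 / (G(1/b) * G(3/b))  for b by bisection,
    where r = (mean |x|)^2 / mean x^2 and G is the gamma function.
    b = 2 recovers the Gaussian, b = 1 the Laplacian."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    r = m1 * m1 / m2
    g = math.gamma
    ratio = lambda b: g(2.0 / b) ** 2 / (g(1.0 / b) * g(3.0 / b))
    lo, hi = 0.1, 10.0
    for _ in range(80):  # ratio(b) increases monotonically in b
        mid = 0.5 * (lo + hi)
        if ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A heavier-tailed subband (smaller beta) would then be assigned a different quantizer and bit allocation than a near-Gaussian one.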

Relevance: 60.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken on coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.

The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas share the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
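The product-code structure itself is not spelled out in the abstract; the sketch below shows the generic PCVQ idea it builds on, with toy codebooks: the vector is split into subvectors, each quantized against its own small codebook, so search cost and index size grow with the sum rather than the product of the codebook sizes. The resulting index stream is then what the thesis's lossless stage compresses further.

```python
import numpy as np

def pcvq_encode(x, codebooks):
    """Split x into as many subvectors as there are codebooks and
    quantize each part against its own codebook independently."""
    parts = np.split(x, len(codebooks))
    return [int(np.argmin(((cb - p) ** 2).sum(axis=1)))
            for cb, p in zip(codebooks, parts)]

def pcvq_decode(indices, codebooks):
    """Reassemble the vector from the chosen sub-codewords."""
    return np.concatenate([cb[i] for cb, i in zip(codebooks, indices)])

# Toy usage: a 4-dimensional vector coded as two 2-D sub-codebook indices
codebooks = [np.array([[0.0, 0.0], [1.0, 1.0]]),
             np.array([[0.0, 0.0], [2.0, 2.0]])]
idx = pcvq_encode(np.array([0.9, 1.1, 0.1, -0.1]), codebooks)
```

With, say, two codebooks of 1024 entries each, a full-search VQ of equivalent resolution would need a single codebook of over a million entries; the product-code split is what makes the encoder tractable.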

Relevance: 60.00%

Abstract:

In this paper we present pyktree, an implementation of the K-tree algorithm in the Python programming language. The K-tree algorithm provides highly balanced search trees for vector quantization that scale up to very large data sets. Pyktree is highly modular and well suited to rapid prototyping of novel distance measures and centroid representations. It is easy to install, and it provides a Python package for library use as well as command-line tools.

Relevance: 20.00%

Abstract:

Over the past decade, plants have been used as expression hosts for the production of pharmaceutically important and commercially valuable proteins. Plants offer many advantages over other expression systems, such as lower production costs, rapid scale-up of production, post-translational modifications similar to those of animals, and the low likelihood of contamination with animal pathogens, microbial toxins or oncogenic sequences. However, improving recombinant protein yield remains one of the greatest challenges to molecular farming. In-Plant Activation (InPAct) is a newly developed technology that offers activatable and high-level expression of heterologous proteins in plants. InPAct vectors contain the geminivirus cis elements essential for rolling circle replication (RCR) and are arranged such that the gene of interest is only expressed in the presence of the cognate viral replication-associated protein (Rep). The expression of Rep in planta may be controlled by a tissue-specific, developmentally regulated or chemically inducible promoter such that heterologous protein accumulation can be spatially and temporally controlled. One of the challenges for the successful exploitation of InPAct technology is the control of Rep expression, as even very low levels of this protein can reduce transformation efficiency, cause abnormal phenotypes and prematurely activate the InPAct vector in regenerated plants. Tight regulation over transgene expression is also essential if expressing cytotoxic products. Unfortunately, many tissue-specific and inducible promoters are unsuitable for controlling expression of Rep due to basal activity in the absence of inducer or in tissues other than the target tissue. This PhD aimed to control Rep activity through the production of single chain variable fragments (scFvs) specific to motif III of Tobacco yellow dwarf virus (TbYDV) Rep.
Due to the important role played by the conserved motif III in the RCR, it was postulated that such scFvs can be used to neutralise the activity of the low amount of Rep expressed from a “leaky” inducible promoter, thus preventing activation of the TbYDV-based InPAct vector until intentional induction. Such scFvs could also offer the potential to confer partial or complete resistance to TbYDV, and possibly heterologous viruses as motif III is conserved between geminiviruses. Studies were first undertaken to determine the levels of TbYDV Rep and TbYDV replication-associated protein A (RepA) required for optimal transgene expression from a TbYDV-based InPAct vector. Transient assays in a non-regenerable Nicotiana tabacum (NT-1) cell line were undertaken using a TbYDV-based InPAct vector containing the uidA reporter gene (encoding GUS) in combination with TbYDV Rep and RepA under the control of promoters with high (CaMV 35S) or low (Banana bunchy top virus DNA-R, BT1) activity. The replication enhancer protein of Tomato leaf curl begomovirus (ToLCV), REn, was also used in some co-bombardment experiments to examine whether RepA could be substituted by a replication enhancer from another geminivirus genus. GUS expression was observed both quantitatively and qualitatively by fluorometric and histochemical assays, respectively. GUS expression from the TbYDV-based InPAct vector was found to be greater when Rep was expected to be expressed at low levels (BT1 promoter) rather than high levels (35S promoter). GUS expression was further enhanced when Rep and RepA were co-bombarded with a low ratio of Rep to RepA. Substituting TbYDV RepA with ToLCV REn also enhanced GUS expression but more importantly highest GUS expression was observed when cells were co-transformed with expression vectors directing low levels of Rep and high levels of RepA irrespective of the level of REn. In this case, GUS expression was approximately 74-fold higher than that from a non-replicating vector. 
The use of different terminators, namely CaMV 35S and Nos terminators, in InPAct vectors was found to influence GUS expression. In the presence of Rep, GUS expression was greater using pInPActGUS-Nos rather than pInPActGUS-35S. The only instance of GUS expression being greater from vectors containing the 35S terminator was when comparing expression from cells transformed with Rep-, RepA- and REn-expressing vectors and either non-replicating vectors, p35SGS-Nos or p35SGS-35S. This difference was most likely caused by an interaction of viral replication proteins with each other and the terminators. These results indicated that (i) the level of replication-associated proteins is critical to high transgene expression, (ii) the choice of terminator within the InPAct vector may affect expression levels and (iii) very low levels of Rep can activate InPAct vectors, hence controlling Rep activity is critical. Prior to generating recombinant scFvs, a recombinant TbYDV Rep was produced in E. coli to act as a control to enable the screening for Rep-specific antibodies. A bacterial expression vector was constructed to express recombinant TbYDV Rep with an N-terminal His-tag (N-His-Rep). Despite investigating several purification techniques including Ni-NTA, anion exchange, hydrophobic interaction and size exclusion chromatography, N-His-Rep could only be partially purified using a Ni-NTA column under native conditions. Although it was not certain that this recombinant N-His-Rep had the same conformation as the native TbYDV Rep and was functional, results from an electromobility shift assay (EMSA) showed that N-His-Rep was able to interact with the TbYDV LIR and was, therefore, possibly functional. Two hybridoma cell lines from mice, immunised with a synthetic peptide containing the TbYDV Rep motif III amino acid sequence, were generated by GenScript (USA).
Monoclonal antibodies secreted by the two hybridoma cell lines were first screened against denatured N-His-Rep in Western analysis. After demonstrating their ability to bind N-His-Rep, two scFvs (scFv1 and scFv2) were generated using a PCR-based approach. Whereas the variable heavy chain (VH) from both cell lines could be amplified, only the variable light chain (VL) from cell line 2 was amplified. As a result, scFv1 contained VH and VL from cell line 1, whereas scFv2 contained VH from cell line 2 and VL from cell line 1. Both scFvs were first expressed in E. coli in order to evaluate their affinity to the recombinant TbYDV N-His-Rep. The preliminary results demonstrated that both scFvs were able to bind to the denatured N-His-Rep. However, EMSAs revealed that only scFv2 was able to bind to native N-His-Rep and prevent it from interacting with the TbYDV LIR. Each scFv was cloned into plant expression vectors and co-bombarded into NT-1 cells with the TbYDV-based InPAct GUS expression vector and pBT1-Rep to examine whether the scFvs could prevent Rep from mediating RCR. Although it was expected that the addition of the scFvs would result in decreased GUS expression, GUS expression was found to slightly increase. This increase was even more pronounced when the scFvs were targeted to the cell nucleus by the inclusion of the Simian virus 40 large T antigen (SV40) nuclear localisation signal (NLS). It was postulated that the scFvs were binding to a proportion of Rep, leaving a small amount available to mediate RCR. The outcomes of this project provide evidence that very high levels of recombinant protein can theoretically be expressed using InPAct vectors with judicious selection and control of viral replication proteins. However, the question of whether the scFvs generated in this project have sufficient affinity for TbYDV Rep to prevent its activity in a stably transformed plant remains unknown. 
It may be that other scFvs with different combinations of VH and VL may have greater affinity for TbYDV Rep. Such scFvs, when expressed at high levels in planta, might also confer resistance to TbYDV and possibly heterologous geminiviruses.

Relevance: 20.00%

Abstract:

When classifying a signal, ideally we want our classifier to trigger a large response when it encounters a positive example and to have little to no response for all other examples. Unfortunately, in practice this does not occur: responses fluctuate, often causing false alarms. There is a myriad of reasons why this is the case, most notably the failure to incorporate the dynamics of the signal into the classification. In facial expression recognition, this has been highlighted as one major research question. In this paper we present a novel technique which incorporates the dynamics of the signal; it produces a strong response when the peak expression is found and suppresses all other responses as much as possible. The ability to automatically and accurately recognize the facial expressions of drivers is highly relevant to the automotive industry. For example, the early recognition of "surprise" could indicate that an accident is about to occur, and various safeguards could immediately be deployed to avoid or minimize injury and damage. We conducted preliminary experiments on the extended Cohn-Kanade (CK+) database which show the technique's benefits.